
    Neutrinos in the Simplest Little Higgs Model

    The simplest little Higgs model based on a SU(3) global symmetry contains a SU(3)_{weak} triplet and a singlet per generation in the lepton sector. A neutral component of the triplet and the singlet combine into a neutral vector-like SU(2)_L singlet after electroweak symmetry breaking, while the other neutral component of the triplet is the SM neutrino. At tree level, the Yukawa couplings of the lepton sector not only allow the neutral vector-like lepton to couple to the SM neutrino but also give them a Dirac mass. Majorana mass terms for the SM neutrinos and their partners arise at one loop, leading to neutrino flavor mixing in addition to neutrino-heavy neutral lepton mixing.
    Comment: 13 pages, 3 figures, contents change
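    The mass structure described in the abstract can be summarized schematically as a symmetric mass matrix for the neutral leptons (the symbol names m_D, mu_nu, and mu_N are illustrative placeholders, not taken from the paper):

    ```latex
    % Schematic neutral-lepton mass matrix in the basis (\nu, N),
    % where \nu is the SM neutrino and N its vector-like partner.
    % m_D is the tree-level Dirac mass from the Yukawa couplings;
    % \mu_\nu and \mu_N are the one-loop-induced Majorana masses.
    M =
    \begin{pmatrix}
      \mu_\nu & m_D \\
      m_D     & \mu_N
    \end{pmatrix}
    ```

    Diagonalizing a matrix of this form mixes the SM neutrino with the heavy neutral lepton, which is the mixing pattern the abstract refers to.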

    Weakly-supervised Visual Grounding of Phrases with Linguistic Structures

    We propose a weakly-supervised approach that takes image-sentence pairs as input and learns to visually ground (i.e., localize) arbitrary linguistic phrases, in the form of spatial attention masks. Specifically, the model is trained with images and their associated image-level captions, without any explicit region-to-phrase correspondence annotations. To this end, we introduce an end-to-end model which learns visual groundings of phrases with two types of carefully designed loss functions. In addition to the standard discriminative loss, which enforces that attended image regions and phrases are consistently encoded, we propose a novel structural loss which makes use of the parse tree structures induced by the sentences. In particular, we ensure complementarity among the attention masks that correspond to sibling noun phrases, and compositionality of attention masks among the children and parent phrases, as defined by the sentence parse tree. We validate the effectiveness of our approach on the Microsoft COCO and Visual Genome datasets.Comment: CVPR 201
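    The two structural constraints described in the abstract can be sketched as simple penalties on attention masks. This is a minimal illustration, not the paper's implementation: the function names and the exact formulations (pairwise overlap for complementarity, element-wise max as a soft union for compositionality) are assumptions chosen for clarity.

    ```python
    import numpy as np

    def complementarity_loss(sibling_masks):
        """Penalize spatial overlap between attention masks of sibling
        noun phrases, pushing them toward complementary regions.
        (Hypothetical formulation: mean of pairwise element-wise products.)"""
        loss = 0.0
        for i in range(len(sibling_masks)):
            for j in range(i + 1, len(sibling_masks)):
                loss += np.mean(sibling_masks[i] * sibling_masks[j])
        return loss

    def compositionality_loss(parent_mask, child_masks):
        """Encourage a parent phrase's mask to match the union of its
        children's masks, per the sentence parse tree. The union is
        approximated here by an element-wise maximum."""
        union = np.maximum.reduce(child_masks)
        return np.mean((parent_mask - union) ** 2)
    ```

    With disjoint sibling masks the complementarity penalty vanishes, and when a parent mask equals the union of its children the compositionality penalty vanishes, matching the intuition in the abstract.
    
    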